ML workloads


Delta Sum Learning: an approach for fast and global convergence in Gossip Learning

Goethals, Tom, Sebrechts, Merlijn, De Schrijver, Stijn, De Turck, Filip, Volckaert, Bruno

arXiv.org Artificial Intelligence

Abstract: Federated Learning is a popular approach for distributed learning due to its security and computational benefits. With the advent of powerful devices at the network edge, Gossip Learning further decentralizes Federated Learning by removing centralized integration and relying fully on peer-to-peer updates. However, the averaging methods generally used in both Federated and Gossip Learning are not ideal for model accuracy and global convergence. Additionally, there are few options to deploy learning workloads at the edge as part of a larger application using a declarative approach such as Kubernetes manifests. This paper proposes Delta Sum Learning as a method to improve the basic aggregation operation in Gossip Learning, and implements it in a decentralized orchestration framework based on the Open Application Model, which allows for dynamic node discovery and intent-driven deployment of multi-workload applications. Evaluation results show that Delta Sum performance is on par with alternative integration methods for 10-node topologies, but results in a 58% lower global accuracy drop when scaling to 50 nodes. Overall, it shows strong global convergence and a logarithmic loss of accuracy with increasing topology size, compared to a linear loss for alternatives under limited connectivity.
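The Delta Sum aggregation rule itself is defined in the full paper; as context, the plain averaging baseline it aims to improve on can be sketched as a pairwise gossip exchange in which two peers replace their parameters with the element-wise mean (a minimal sketch; the function name and parameter layout are illustrative, not from the paper):

```python
import numpy as np

def gossip_average_step(params_a, params_b):
    """One pairwise gossip exchange: both peers adopt the element-wise
    mean of their current model parameters. This is the standard
    averaging baseline, not the paper's Delta Sum rule."""
    merged = {k: (params_a[k] + params_b[k]) / 2.0 for k in params_a}
    return merged, dict(merged)

# Two peers with divergent parameters mix their states in one exchange.
a = {"w": np.array([1.0, 3.0])}
b = {"w": np.array([5.0, 1.0])}
a, b = gossip_average_step(a, b)
print(a["w"])  # [3. 2.]
```

Repeated exchanges over a connected topology drive all peers toward a common model, but convergence slows as the topology grows, which is the weakness the paper's 50-node results target.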


Detecting Anomalies in Machine Learning Infrastructure via Hardware Telemetry

Chen, Ziji, Chien, Steven W. D., Qian, Peng, Zilberman, Noa

arXiv.org Artificial Intelligence

Modern machine learning (ML) has grown into a tightly coupled, full-stack ecosystem that combines hardware, software, network, and applications. Many users rely on cloud providers for elastic, isolated, and cost-efficient resources. Unfortunately, these platform-as-a-service offerings rely on virtualization, which gives operators little insight into users' workloads. This hinders resource optimization by the operator, which is essential to ensure cost efficiency and minimize execution time. In this paper, we argue that workload knowledge is unnecessary for system-level optimization. We propose Reveal, which takes a hardware-centric approach, relying only on hardware signals, which are fully accessible to operators. Using low-level signals collected from the system, Reveal detects anomalies through an unsupervised learning pipeline. The pipeline is developed by analyzing over 30 popular ML models on various hardware platforms, ensuring adaptability to emerging workloads and unknown deployment patterns. Using Reveal, we successfully identified both network and system configuration issues, accelerating the DeepSeek model by 5.97%.
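The abstract does not specify Reveal's pipeline internals; as a hedged illustration of the general idea (unsupervised anomaly detection over raw hardware counters, with no workload labels), a minimal z-score detector might look like this (all names and thresholds are assumptions for illustration):

```python
import numpy as np

def detect_anomalies(counters, z_threshold=3.0):
    """Flag samples whose hardware-counter readings deviate strongly
    from the typical profile, without any workload labels.
    `counters` is an (n_samples, n_signals) array."""
    mean = counters.mean(axis=0)
    std = counters.std(axis=0) + 1e-9        # avoid divide-by-zero
    z = np.abs((counters - mean) / std)      # per-signal z-scores
    return np.where(z.max(axis=1) > z_threshold)[0]

# 20 normal telemetry samples plus one outlier in the first signal.
telemetry = np.vstack([np.tile([100.0, 50.0], (20, 1)),
                       [[200.0, 50.0]]])
print(detect_anomalies(telemetry))  # [20]
```

A production system like Reveal would replace this with a learned model, but the operator-side premise is the same: only the counter values are needed, not knowledge of what the workload computes.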


Evolving HPC services to enable ML workloads on HPE Cray EX

Schuppli, Stefano, Mohamed, Fawzi, Mendonça, Henrique, Mujkanovic, Nina, Palme, Elia, Conciatore, Dino, Drescher, Lukas, Gila, Miguel, Witlox, Pim, VandeVondele, Joost, Martinasso, Maxime, Schulthess, Thomas C., Hoefler, Torsten

arXiv.org Artificial Intelligence

The Alps Research Infrastructure leverages GH200 technology at scale, featuring 10,752 GPUs. Accessing Alps provides a significant computational advantage for researchers in Artificial Intelligence (AI) and Machine Learning (ML). While Alps serves a broad range of scientific communities, traditional HPC services alone are not sufficient to meet the dynamic needs of the ML community. This paper presents an initial investigation into extending HPC service capabilities to better support ML workloads. We identify key challenges and gaps observed by the Swiss AI community since the early-access phase of Alps in 2023, and propose several technological enhancements. These include a user environment designed to facilitate the adoption of HPC for ML workloads, balancing performance with flexibility; a utility for rapid performance screening of ML applications during development; observability capabilities and data products for inspecting ongoing large-scale ML workloads; a utility to simplify the vetting of allocated nodes for compute readiness; a service plane infrastructure to deploy various types of workloads, including support and inference services; and a storage infrastructure tailored to the specific needs of ML workloads. These enhancements aim to facilitate the execution of ML workloads on HPC systems, increase system usability and resilience, and better align with the needs of the ML community. We also discuss our current approach to security aspects. This paper concludes by placing these proposals in the broader context of changes in the communities served by HPC infrastructure like ours.


Towards Easy and Realistic Network Infrastructure Testing for Large-scale Machine Learning

Yoo, Jinsun, Lao, ChonLam, Cao, Lianjie, Lantz, Bob, Yu, Minlan, Krishna, Tushar, Sharma, Puneet

arXiv.org Artificial Intelligence

This paper lays the foundation for Genie, a testing framework that captures the impact of real hardware network behavior on ML workload performance, without requiring expensive GPUs. Genie uses CPU-initiated traffic over a hardware testbed to emulate GPU to GPU communication, and adapts the ASTRA-sim simulator to model interaction between the network and the ML workload.


Machine Learning Fleet Efficiency: Analyzing and Optimizing Large-Scale Google TPU Systems with ML Productivity Goodput

Wongpanich, Arissa, Oguntebi, Tayo, Paredes, Jose Baiocchi, Wang, Yu Emma, Phothilimthana, Phitchaya Mangpo, Mitra, Ritwika, Zhou, Zongwei, Kumar, Naveen, Reddi, Vijay Janapa

arXiv.org Artificial Intelligence

Recent years have seen the emergence of machine learning (ML) workloads deployed in warehouse-scale computing (WSC) settings, also known as ML fleets. As the computational demands placed on ML fleets have increased due to the rise of large models and growing demand for ML applications, it has become increasingly critical to measure and improve the efficiency of such systems. However, there is not yet an established methodology to characterize ML fleet performance and identify potential performance optimizations accordingly. This paper presents a large-scale analysis of an ML fleet based on Google's TPUs, introducing a framework to capture fleet-wide efficiency, systematically evaluate performance characteristics, and identify optimization strategies for the fleet. We begin by defining an ML fleet, outlining its components, and analyzing an example Google ML fleet in production comprising thousands of accelerators running diverse workloads. Our study reveals several critical insights: first, ML fleets extend beyond the hardware layer, with model, data, framework, compiler, and scheduling layers significantly impacting performance; second, the heterogeneous nature of ML fleets poses challenges in characterizing individual workload performance; and third, traditional utilization-based metrics prove insufficient for ML fleet characterization. To address these challenges, we present the "ML Productivity Goodput" (MPG) metric to measure ML fleet efficiency. We show how to leverage this metric to characterize the fleet across the ML system stack. We also present methods to identify and optimize performance bottlenecks using MPG, providing strategies for managing warehouse-scale ML systems in general. Lastly, we demonstrate quantitative evaluations from applying these methods to a real ML fleet for internal-facing Google TPU workloads, where we observed tangible improvements.
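The precise definition of ML Productivity Goodput (MPG) is given in the full paper; purely as a hedged sketch of what a goodput-style fleet metric combines (time lost to failures and restarts, and how close achieved throughput sits to the hardware peak), one might write (the function and its decomposition are illustrative assumptions, not the paper's formula):

```python
def productivity_goodput(total_hours, wasted_hours,
                         achieved_throughput, peak_throughput):
    """Hypothetical goodput-style efficiency: the fraction of wall-clock
    time spent making forward progress, scaled by how close achieved
    throughput is to the hardware peak. NOT the paper's exact MPG
    definition, which spans the full ML system stack."""
    availability = (total_hours - wasted_hours) / total_hours
    efficiency = achieved_throughput / peak_throughput
    return availability * efficiency

# A job ran 100 h, lost 10 h to restarts, and sustained 40% of peak:
print(round(productivity_goodput(100, 10, 0.4, 1.0), 2))  # 0.36
```

The point such a metric makes, and that utilization alone misses, is that a fleet can be fully "busy" while producing little goodput, e.g. when recomputing work after failures.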


The Hidden Bloat in Machine Learning Systems

Zhang, Huaifeng, Ali-Eldin, Ahmed

arXiv.org Artificial Intelligence

Software bloat refers to code and features that are not used by software at runtime. For Machine Learning (ML) systems, bloat is a major contributor to their technical debt, leading to decreased performance and resource wastage. In this work, we present Negativa-ML, a novel tool to identify and remove bloat in ML frameworks by analyzing their shared libraries. Our approach includes novel techniques to detect and locate unnecessary code within device code, a key area overlooked by existing research, which focuses primarily on host code. We evaluate Negativa-ML using four popular ML frameworks across ten workloads and over 300 shared libraries. The results demonstrate that the ML frameworks are highly bloated on both the device and host code side. On average, Negativa-ML reduces the device code size in these frameworks by up to 75% and the host code by up to 72%, resulting in total file size reductions of up to 55%. The device code is a primary source of bloat within ML frameworks. Through debloating, we achieve reductions in peak host memory usage, peak GPU memory usage, and execution time by up to 74.6%, 69.6%, and 44.6%, respectively.


Towards Universal Performance Modeling for Machine Learning Training on Multi-GPU Platforms

Lin, Zhongyi, Sun, Ning, Bhattacharya, Pallab, Feng, Xizhou, Feng, Louis, Owens, John D.

arXiv.org Artificial Intelligence

Characterizing and predicting the training performance of modern machine learning (ML) workloads on compute systems with compute and communication spread between CPUs, GPUs, and network devices is not only the key to optimization and planning but also a complex goal to achieve. The primary challenges include the complexity of synchronization and load balancing between CPUs and GPUs, the variance in input data distribution, and the use of different communication devices and topologies (e.g., NVLink, PCIe, network cards) that connect multiple compute devices, coupled with the desire for flexible training configurations. Built on top of our prior work for single-GPU platforms, we address these challenges and enable multi-GPU performance modeling by incorporating (1) data-distribution-aware performance models for embedding table lookup, and (2) data movement prediction of communication collectives, into our upgraded performance modeling pipeline equipped with inter- and intra-rank synchronization for ML workloads trained on multi-GPU platforms. Beyond accurately predicting the per-iteration training time of DLRM models with random configurations at a geomean error of 5.21% on two multi-GPU platforms, our prediction pipeline generalizes well to other types of ML workloads, such as Transformer-based NLP models, with a geomean error of 3.00%. Moreover, even without actually running ML workloads like DLRMs on the hardware, it is capable of generating insights such as quickly selecting the fastest embedding table sharding configuration (with a success rate of 85%).
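The paper's pipeline models individual operators and collectives in detail; the core compute/communication interaction it must capture can be sketched with a toy overlap model (a deliberate simplification, with names and the overlap parameter as illustrative assumptions):

```python
def predict_iteration_ms(compute_ms, comm_ms, overlap_fraction):
    """Toy per-iteration model: communication that overlaps with
    compute is hidden; only the exposed remainder extends the
    critical path. Real pipelines like the paper's also model
    CPU-GPU synchronization and per-collective data movement."""
    exposed_comm = comm_ms * (1.0 - overlap_fraction)
    return compute_ms + exposed_comm

# 80 ms of compute, 30 ms of all-reduce, half of it overlapped:
print(predict_iteration_ms(80.0, 30.0, 0.5))  # 95.0
```

Even this crude form shows why topology matters: a slower interconnect raises `comm_ms`, and any portion that cannot be overlapped lands directly on the iteration time.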


I/O in Machine Learning Applications on HPC Systems: A 360-degree Survey

Lewis, Noah, Bez, Jean Luca, Byna, Suren

arXiv.org Artificial Intelligence

Because of the increased popularity of Machine Learning (ML) workloads, there is a rising demand for I/O systems that can effectively accommodate their distinct I/O access patterns. Write operation bursts commonly dominate traditional workloads; however, ML workloads are usually read-intensive and use many small files [99]. Due to the absence of a well-established consensus on the preferred I/O stack for ML workloads, numerous developers resort to crafting their own ad-hoc algorithms and storage systems to cater to the specific requirements of their applications [50]. This can result in sub-optimal application performance due to the under-utilization of the storage system, prompting the necessity for novel I/O optimization methods tailored to the demands of ML workloads. In Figure 1, we show the evolving I/O stack used for running ML workloads (on the right side) in comparison with the traditional HPC I/O stack (on the left side). The traditional HPC I/O stack has been developed to support massive parallelism.
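The contrast the survey draws, i.e. many small randomized reads per training epoch versus large sequential write bursts, can be made concrete with a miniature dataset (a hypothetical illustration; the file layout and sizes are assumptions, not from the survey):

```python
import os
import random
import tempfile

# A training dataset is typically many small per-sample files that
# each epoch reads in a shuffled order, unlike the large sequential
# checkpoint writes typical of traditional HPC workloads.
root = tempfile.mkdtemp()
for i in range(100):
    with open(os.path.join(root, f"sample_{i}.bin"), "wb") as f:
        f.write(os.urandom(512))          # 100 small files, 512 B each

files = sorted(os.listdir(root))
random.shuffle(files)                     # randomized per-epoch order
total = 0
for name in files:                        # read-intensive access pattern
    with open(os.path.join(root, name), "rb") as f:
        total += len(f.read())
print(total)  # 100 files x 512 bytes = 51200
```

Each shuffled pass issues one metadata lookup plus one tiny read per sample, which is exactly the pattern that parallel file systems tuned for large sequential I/O handle poorly.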


Optimize AI/ML workloads for sustainability: Part 3, deployment and monitoring

#artificialintelligence

We're celebrating Earth Day 2022 from 4/22 through 4/29 with posts that highlight how to build, maintain, and refine your workloads for sustainability. AWS estimates that inference (the process of using a trained machine learning [ML] algorithm to make a prediction) makes up 90 percent of the cost of an ML model. Given that with AWS you pay for what you use, we estimate that inference also generally accounts for most of the resource usage within an ML lifecycle. In Part 3, our final piece in the series, we show you how to reduce the environmental impact of your ML workload once your model is in production. If you missed the first parts of this series, in Part 1 we showed you how to examine your workload to help you 1) evaluate the impact of your workload, 2) identify alternatives to training your own model, and 3) optimize data processing.


Ray, the machine learning tech behind OpenAI, levels up to Ray 2.0

#artificialintelligence

Over the last two years, one of the most common ways for organizations to scale and run increasingly large and complex artificial intelligence (AI) workloads has been the open-source Ray framework, used by companies from OpenAI to Shopify and Instacart. Ray enables machine learning (ML) models to scale across hardware resources and can also be used to support MLOps workflows across different ML tools. Ray 1.0 came out in September 2020 and has had a series of iterations over the last two years. Today, the next major milestone was released, with the general availability of Ray 2.0 at the Ray Summit in San Francisco.